\n",
"\n",
"# Introduction to Large Language Models and Generative Text\n",
"\n",
"Copyright, NLP from scratch, 2025.\n",
"\n",
"[LLMSfor.me](https://llmsfor.me)\n",
"\n",
"------------"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uuznFjxWVV0n"
},
"source": [
"## Introduction π¬\n",
"In this notebook, we will explore Large Language Models (LLMs) for generative text, and show how they can be leveraged the open source libraries from [Hugging Face](https://huggingface.co/).\n",
"\n",
"This notebook is best run in [Google Colab](https://colab.research.google.com/), where the majority of dependencies are already installed. However, if you wish to run the notebook locally, please follow the [directions for setting up a local environment](https://drive.google.com/file/d/1EV1seK-dUHRCzj2EDuu3ETAhUyjzOGRd/view?usp=drive_link) and you may then download the notebook as a `.ipynb` and run in either Jupyter or Jupyterlab.\n",
"\n",
"Since we will be using GPU in this notebook for compute-intensive tasks, please ensure that if running on Colab the runtime type is set to GPU. In the menu in Colab, select *Runtime -> Change runtime type*, then select T4 GPU (if using Colab Free) or another GPU instance type if using Colab Pro.\n",
"\n",
"
\n",
"\n",
"Though Google Colab comes with many useful data science libraries included by default (including Pytorch), the Hugging Face libraries are not, so we will first install those here using `pip`, as they will be used in the remainder of the notebook.\n",
"\n",
"- The `transformers` library, for general usage of transformer models\n",
"- The `datasets` library, for working with datasets hosted on Hugging Face\n",
"- The `accelerate` library, for using GPU for inference\n",
"- The `evaluate` library, for metrics for measuring model performance in training\n",
"- The `bitsandbytes` library for model quantization\n",
"- The `peft` library, for efficient fine-tuning of models in the second half of the workshop\n",
"- The `huggingface_hub` library, for interacting with models on the Hugging Face hub\n",
"\n",
"We will also be using custom datasets from the NLP from scratch [github repo](https://github.com/nlpfromscratch/datasets/) and so we will clone this repo to have these all available locally.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 6400,
"status": "ok",
"timestamp": 1700492726671,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "nI1qpfWOM_rQ",
"outputId": "0145fc64-19ab-43cc-a58e-d321552d7e56"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cloning into 'datasets'...\n",
"remote: Enumerating objects: 70, done.\u001b[K\n",
"remote: Counting objects: 100% (70/70), done.\u001b[K\n",
"remote: Compressing objects: 100% (62/62), done.\u001b[K\n",
"remote: Total 70 (delta 14), reused 61 (delta 8), pack-reused 0\u001b[K\n",
"Receiving objects: 100% (70/70), 34.61 MiB | 16.89 MiB/s, done.\n",
"Resolving deltas: 100% (14/14), done.\n",
"Updating files: 100% (27/27), done.\n"
]
}
],
"source": [
"!git clone https://github.com/nlpfromscratch/datasets.git"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 13815,
"status": "ok",
"timestamp": 1700492743408,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "Hvc-b3aG53dv",
"outputId": "1bd77fd7-4c0f-4a9c-a165-8271f51f3565"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: transformers in /usr/local/lib/python3.10/dist-packages (4.35.2)\n",
"Collecting datasets\n",
" Downloading datasets-2.15.0-py3-none-any.whl (521 kB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m521.2/521.2 kB\u001b[0m \u001b[31m2.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting accelerate\n",
" Downloading accelerate-0.24.1-py3-none-any.whl (261 kB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m261.4/261.4 kB\u001b[0m \u001b[31m9.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting evaluate\n",
" Downloading evaluate-0.4.1-py3-none-any.whl (84 kB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m84.1/84.1 kB\u001b[0m \u001b[31m10.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting bitsandbytes\n",
" Downloading bitsandbytes-0.41.2.post2-py3-none-any.whl (92.6 MB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m92.6/92.6 MB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting peft\n",
" Downloading peft-0.6.2-py3-none-any.whl (174 kB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m174.7/174.7 kB\u001b[0m \u001b[31m23.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: huggingface_hub in /usr/local/lib/python3.10/dist-packages (0.19.3)\n",
"Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers) (3.13.1)\n",
"Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers) (1.23.5)\n",
"Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers) (23.2)\n",
"Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers) (6.0.1)\n",
"Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers) (2023.6.3)\n",
"Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers) (2.31.0)\n",
"Requirement already satisfied: tokenizers<0.19,>=0.14 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.15.0)\n",
"Requirement already satisfied: safetensors>=0.3.1 in /usr/local/lib/python3.10/dist-packages (from transformers) (0.4.0)\n",
"Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers) (4.66.1)\n",
"Requirement already satisfied: pyarrow>=8.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (9.0.0)\n",
"Collecting pyarrow-hotfix (from datasets)\n",
" Downloading pyarrow_hotfix-0.5-py3-none-any.whl (7.8 kB)\n",
"Collecting dill<0.3.8,>=0.3.0 (from datasets)\n",
" Downloading dill-0.3.7-py3-none-any.whl (115 kB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m115.3/115.3 kB\u001b[0m \u001b[31m17.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets) (1.5.3)\n",
"Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets) (3.4.1)\n",
"Collecting multiprocess (from datasets)\n",
" Downloading multiprocess-0.70.15-py310-none-any.whl (134 kB)\n",
"\u001b[2K \u001b[90mββββββββββββββββββββββββββββββββββββββββ\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m17.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: fsspec[http]<=2023.10.0,>=2023.1.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (2023.6.0)\n",
"Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets) (3.8.6)\n",
"Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate) (5.9.5)\n",
"Requirement already satisfied: torch>=1.10.0 in /usr/local/lib/python3.10/dist-packages (from accelerate) (2.1.0+cu118)\n",
"Collecting responses<0.19 (from evaluate)\n",
" Downloading responses-0.18.0-py3-none-any.whl (38 kB)\n",
"Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface_hub) (4.5.0)\n",
"Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (23.1.0)\n",
"Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (3.3.2)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (6.0.4)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (4.0.3)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.9.2)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.4.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.3.1)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (3.4)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (2.0.7)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->transformers) (2023.7.22)\n",
"Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (1.12)\n",
"Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (3.2.1)\n",
"Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (3.1.2)\n",
"Requirement already satisfied: triton==2.1.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.10.0->accelerate) (2.1.0)\n",
"Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2.8.2)\n",
"Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2023.3.post1)\n",
"Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->datasets) (1.16.0)\n",
"Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.10.0->accelerate) (2.1.3)\n",
"Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.10.0->accelerate) (1.3.0)\n",
"Installing collected packages: bitsandbytes, pyarrow-hotfix, dill, responses, multiprocess, accelerate, datasets, peft, evaluate\n",
"Successfully installed accelerate-0.24.1 bitsandbytes-0.41.2.post2 datasets-2.15.0 dill-0.3.7 evaluate-0.4.1 multiprocess-0.70.15 peft-0.6.2 pyarrow-hotfix-0.5 responses-0.18.0\n"
]
}
],
"source": [
"!pip install transformers datasets accelerate evaluate bitsandbytes peft huggingface_hub"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FhHv-7NBra2C"
},
"source": [
"## Large Language Models (LLMs) and the Transformer Architecture\n",
"\n",
"What is a large language language model (LLM)? While there is no universally accepted definition for an LLM, large language models are a type of deep learning model that is understood be both very large in size (number of parameters) and also trained on very large datasets. Lately, the datasets used to train LLMs have grown increasingly in size and it is not unusual for these to represent double digit percentages of the web.\n",
"\n",
"LLMs are a type of *deep learning* model, also known as a neural network, a type of machine learning model that seeks to imitate the structure of the human brain. Traditional deep learning models take inputs - structured data found in rows and columns like in a database, or unstructured data like images, video, audio, or, in the case of natural language processing (NLP) models, free-form text - and use these to make predictions about a target variables associated with each observation (row) in the input data. For example, in the case of a deep learning models for computer vision, the input could be a dataset of images and the output to predict an associated data label (which Japanese character is this?).\n",
"\n",
"
\n",
"\n",
"Deep learning models are composed of *layers*, each of which is composed of *nodes*, and each of the nodes has inputs which come from previous layers and associated *weights* which are multiplied by each of the inputs. Collectively, these weights are referred to as the model *parameters* we mentioned earlier, and what is learned in the \"learning\" of deep learning is the optimal values for these numeric parameters to best predict the outcome.\n",
"\n",
"Large language models are a category of deep learning models with the following properties:\n",
"- They primarily work with language data (*i.e.* text), either as input, or output, or both - either solely or in conjuction with other data types (images, audio, video, etc.), in which case the LLMs are referred to as *multimodal*.\n",
"- As mentioned above, LLMs are understood to be both very large in size in both model size (hundreds of millions, billions, hundreds of billions, or even *trillions* of parameters) as well as trained on very large datasets, comprising hundreds of millions or billions of tokens.\n",
"\n",
"It is from the latter of these two properties that the remarkable capabilities of recent LLMs has arisen. There is one another important fundamental development which lead to the rise of LLMs as we know them today and that is the transformer architecture."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rpB7eCjqLHXq"
},
"source": [
"\n",
"### The Transformer Architecture\n",
"\n",
"We refer to the structure of a deep learning network as its *architecture* - the example showed in the previous section is that of the simplest type of deep learning model architecture referred to as a *feed-forward* or *fully connected* neural network, since the outputs of each node in the hidden layers serve as input for each node in the following layer.\n",
"\n",
"There are many different types of more complex and specialized architectures for specific tasks and use cases in deep learning. This have arisen from years of research in academia and application in industry. For example, convolution neural networks (CNNs) are the standard for working with computer vision tasks and have different types of layers specifically suited for this.\n",
"\n",
"The Transformer architecture is an enitrely new type of neural network architecture of which the vast majority (but not all) large language models are. The Transformer was introduced fairly recently in the paper [Attention is All You Need](https://arxiv.org/abs/1706.03762) (Vaswani et al, 2018) by researchers from Google Brain. As an interesting side note, contributors to the paper have moved on to notable AI startups - Aiden Gomez went on to be one of the founders of the OpenAI competitior [Cohere](https://en.wikipedia.org/wiki/Cohere) and Noam Shazeer to co-found [Character.ai](https://en.wikipedia.org/wiki/Character.ai). While the transformer architecture was originally devised to be applied to a text-to-text translation task (English and German) and work with what now comparatively is a very small dataset, it has now been found to be broadly applicable and highly performant to a very wide variety of natural language processing tasks and represents the state-of-the-art for most of these, ranging from text generation which will cover in this workshop, to other applications such as image generation, such as with OpenAI's [DALL-E](https://en.wikipedia.org/wiki/DALL-E) models.\n",
"\n",
"
\n",
" \n",
"
The Transformer Architecture, as originally presented in the Attention is All You Need paper
\n",
"\n",
"While the mathematical and technical details of the transformer are very complex, we will not dive too deeply into these here - there are many resources out there available which cover this - for example, [Transformers from Scratch](https://peterbloem.nl/blog/transformers) and the reimplementation of the GPT model, [MinGPT](https://github.com/karpathy/minGPT) by Andrej Karpathy, previously at Tesla and now back at OpenAI.\n",
"\n",
"At a high level, a couple important points to note before we dive into working with LLMs in code.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zgLiLnyQzZW0"
},
"source": [
"### Encoder and Decoder Models\n",
"\n",
"The transformer model is made up of two large blocks: the *encoder* block on the left, and the *decoder* block on the right. While the original transformer architecture was comprised of both of these, there are now specific models which are composed of stacked blocks of each type. For example, models in the research can be *encoder-only* or *decoder-only* models, or a fully transformer with both encoder and decoder. It should be noted that, in practice, these would all still be referred to as transformer models even though the former two are not \"full\" transformers based upon the strict definition of the architecture.\n",
"\n",
"Encoder and decoder models have different tasks to which they are well-suited. Generally speaking, encoder models take text as input and produce a higher dimensional representation of the dataset (corpus) of text - an *embedding* - on which they are trained. You will also sometime hear encoder models referred to *autoencoding* models, as they perform a similar task to the traditional [autoencoder model](https://en.wikipedia.org/wiki/Autoencoder) in deep learning.\n",
"\n",
" \n",
"
\n",
"\n",
"Decoder models, on the other hand, take inputs and produces output probabilities. Most commonly, these are known for doing text generation, where the model take a sequence of text as input and makes predictions about the most likely occuring words which come next, as made famous by the [Generative Pretrained Transformer (GPT)](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) model by OpenAI which we will work with shortly. You will also hear decoder-only models referred to as *autoregressive* models, as they take their own outputs as inputs (in order to make predictions about a sequence of text, word-by-word) and uses these to predict probabilities for the next word (a regression task). As we will see shortly, this type of task in natural language processing is also referred to as *causal language modeling*."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5dlwb33hLSmj"
},
"source": [
"## Working with Generative Text Models"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "X-6o8ncHLJSF"
},
"source": [
"### Use Cases for Generative Text Models\n",
"\n",
"\n",
"\n",
"**Code autocompletion and AI-assisted coding**:\n",
"\n",
"\n",
"\n",
"Microsoftβs [Github Copilot](https://github.com/features/copilot) was launched in June 2022. Initially, more that ΒΌ of developersβ code files on average were generated by GitHub Copilot, and today with widespread adoption this is close to nearly half (~46%) and has been used by over 1M developers. In October 2023, Copilot [surpassed $100M](https://twitter.com/swyx/status/1711792178031460618) in annually recurring revenue.\n",
"\n",
"**Writing Assistants for creativity and copywriting**:\n",
"\n",
"\n",
"\n",
"AI writing assistants have arisen for improved productivity and content creation for marketing, sales, creative, and numerous other areas. For example, Google has made this a part of their core offerings with their announcement of [Duet AI](https://workspace.google.com/solutions/ai/) and Canva has introduced [MagicWrite](https://www.canva.com/magic-write/) based upon OpenAIβs offerings.\n",
"\n",
"**Entertainment and Social Uses:**\n",
"\n",
"\n",
"\n",
"Training generative language models on specific datasets has allowed to give them βpersonalityβ. [Character.ai](http://Character.ai) was created by developers who previously worked on Googleβs LaMDA model, offers chatbots based upon fictional characters and famous individuals. It is #2 on Anderssen- Horowitzβs list of [top 50 most popular GenAI web products](https://a16z.com/how-are-consumers-using-generative-ai/) (Sept 2023)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sHlaMxcb53dw"
},
"source": [
"### Loading our first Hugging Face Model\n",
"\n",
"In this section, we will start generating text with our first large language model, [GPT-2](https://huggingface.co/gpt2) and explore some of the parameters which affect the outputs from a generative text model.\n",
"\n",
"The GPT-2 (Generative Pre-trained Transformer 2) model was the last of the series of GPT models from OpenAI which was \"open\". Following its release in 2019, GPT-3 and subsequent models did not have their weights made available publicly (and in the case for more recent models such as GPT-4, nor the details of their training data and training process).\n",
"\n",
"We can easily work with GPT-2 in [Hugging Face](https://www.huggingface.co). The easiest way to get results as quickly as possible is to use a [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) to generate text *i.e.* to perform inference.\n",
"\n",
"First, we import the Pipeline class from the `transformers` library, then creator an instance of it, specifying the model type we wish to use. In this case, we want to use GPT-2, which is hosted Hugging Face themselves, not as part of a user repo, so the URL for it is just `gpt2`.\n",
"\n",
"Pipelines can also be for a large variety of different tasks, we must specify that the pipeline is for text generation.\n",
"\n",
"Finally, we check whether GPU is available (it should be on Colab) and if so, set the model to use GPU. This requires importing [pytorch](https://en.wikipedia.org/wiki/PyTorch) (`torch`), which is the first line of code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 264,
"referenced_widgets": [
"7df9d1929ed545ec8c84b3ccfc8df5f8",
"aa913447fcfd4ed6a5445c2a5b0f720a",
"efe0cd4760a841eeb1f37c49356aa819",
"1cb016a6ccb94c3590dddc762d658d15",
"1b20ad4e9ddc4d22bb5b67130fc23bdc",
"4ef64ad93ec8408ba170e5857f8bf3cc",
"ac447eae143c44e7b8507fd2f3b0e3e6",
"9e9b9dc8335a45cabd072a9d77e5483d",
"173c24163d0e4303a081bda1a7dcc0d8",
"db889ef5185341cfbee6ca7344e897d5",
"0de4ccd9589d42ba9e7199669c445dd8",
"68d0c9b71e5a468c838235df379c9f0e",
"1e9b0dd9d95540acb4f5e79e773b9ab0",
"700605d7b4d24839a093a5a9291bcb7b",
"dc627573c79641f5af4832b4a0ddac01",
"2a77e525b866428fa30ed1480a7a254c",
"583bb68626cc4377abaaa67f2b9210ce",
"f24a8c5c29be47d980bdb3b3985177f6",
"7edff388bbdb462089c445405aca0b71",
"068e233507bd49d09231f9f752160ef9",
"cd5418c703fe4b36b48cf0eb9149dae6",
"b277db63c49e4585afea1bf973ed065b",
"b869ed297d8240e99418b75f107bf7c2",
"169d651e60684b8d9c4d60b9d4003c43",
"db8e720f263a46baba915102f00e6605",
"813a27933e224c169b28d560aec67651",
"4cf6615eb0a548e4a56e945297a2914b",
"93896ef082544905a3a712cb067be4cc",
"0420143153d341c49700101f3e432027",
"5c9075ee806f469394423cfc4ba87ecf",
"39967f2ff7b24527bbc60ca4c2c9c953",
"2b1dd0b98630439fb19fef0502b46ed5",
"96cffcf8e7e442dfa85cd6f7073f812f",
"5bdc37f2885242ecb15bd12bb8fd4e31",
"6603eaa3ee164465a62809159846d9d0",
"d35c27b1114f4fb0846f0020557d792e",
"bc1947bdcb1a4e07b0a4e600e0498442",
"e47c7fc7fb8a46b58b86b6de38a457c1",
"6788744a6a8a47c0a05afd57acb78fd7",
"e3d3a4309d504914a39eb5079632777b",
"63480a2e583b48babcc78da1cb1d8973",
"dba84ea67b564dddbd014b0711c34074",
"5f4c048e442248578abd03bec047dbd7",
"7c6d31beaaa34a4c848aa82bf7ff09a7",
"1d34de9b3806499095a6efe278532fa9",
"3742445c258e4361b7967d1b84c6abc5",
"9a1b4112d31c43f39727f8a8cdcdcd45",
"333fc6256ef847a0958513414300dbdd",
"78015729dff140eeaf5de8c7a44391a3",
"bdc84f74ca6e4688a73ce23f89f1265c",
"79527a1883cc4613b1603e3e2a6ad707",
"36aeaed6def44125bb3371f3654b4d2c",
"4514e07144ec4e128117c4ea6e880af6",
"d6abe4615cfe40b2bfcc8bc970066636",
"7db5d88a715e4bdfacb7cc5ce91ca811",
"f829a318742445768074993b2f1eee7a",
"7bcfe90387f943b5b63257f73079907e",
"90a0fbc050bf43fdb649b41b4caca625",
"e40426a1107f4b8eb938e37d4ee38e98",
"aaea76548ca041cc966eb130520dd818",
"374cf27d13854c92b29404e1b4687a4d",
"9b9b3555f4804ce0ac56e490614eff69",
"c4c603dcfc304d719b0a1aa46e149bd6",
"05afdfa0494b4626805f8a0caec15d23",
"3f6ad8f53233482ca9565d5aeeb59399",
"fc2609f6a3224472b4f3d4f810d9030a"
]
},
"executionInfo": {
"elapsed": 31017,
"status": "ok",
"timestamp": 1700492807799,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "NG4aOzFD8owq",
"outputId": "2f43823f-78c6-4533-ec8c-b75bf253538c"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7df9d1929ed545ec8c84b3ccfc8df5f8",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"config.json: 0%| | 0.00/665 [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "68d0c9b71e5a468c838235df379c9f0e",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"model.safetensors: 0%| | 0.00/548M [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b869ed297d8240e99418b75f107bf7c2",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"generation_config.json: 0%| | 0.00/124 [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "5bdc37f2885242ecb15bd12bb8fd4e31",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"vocab.json: 0%| | 0.00/1.04M [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "1d34de9b3806499095a6efe278532fa9",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"merges.txt: 0%| | 0.00/456k [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "f829a318742445768074993b2f1eee7a",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer.json: 0%| | 0.00/1.36M [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'generated_text': 'I love applesauce!'}, {'generated_text': 'I love applesauce! My girlfriend even brought us all home,\" she said.\\n\\nShe didn\\'t mention the recent spate of death threats that have been lobbed at her and many of her'}, {'generated_text': \"I love applesauce! I feel great every time. I love the apple flavor but I just can't take it anymore. I've tried two other ways, maybe I'll just get it right\"}]\n"
]
}
],
"source": [
"import torch\n",
"from transformers import pipeline\n",
"\n",
"# Check if GPU is available\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"\n",
"# Create a pipeline of the GPT-2 model\n",
"gpt2_pipeline = pipeline('text-generation', model='gpt2', device=device)\n",
"\n",
"# Create 3 output generations\n",
"outputs = gpt2_pipeline(\"I love applesauce!\", max_length=40, num_return_sequences=3)\n",
"\n",
"# Display the first output\n",
"print(outputs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RTXtOypX96kY"
},
"source": [
"We can see that even though we've only a few lines of code, Hugging Face has pulled down over half a gigabyte of data! These are the [model weights for GPT-2](https://huggingface.co/gpt2/blob/main/pytorch_model.bin). For this part of the notebook, we are also using a smaller version of GPT - the full GPT-2 model, [GPT2-XL](https://huggingface.co/gpt2-xl) is ~6.5 GB!\n",
"\n",
"Let's take a look at what's in the pipeline - it will contain both a `tokenizer`, for breaking inputs up into the tokens that GPT-2 expects, as well as a `model`, in this case, our GPT-2 model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 143,
"status": "ok",
"timestamp": 1700492818172,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "4lPEw0UU96Ti",
"outputId": "fda3b041-6de7-4be8-bce0-b98bf0d0db81"
},
"outputs": [
{
"data": {
"text/plain": [
"transformers.models.gpt2.tokenization_gpt2_fast.GPT2TokenizerFast"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check the class of the tokenizer in the pipeline\n",
"type(gpt2_pipeline.tokenizer)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 148,
"status": "ok",
"timestamp": 1700492820456,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "i83t1eMy90Nb",
"outputId": "34e851b5-af81-4334-b64f-27f0fba8abbb"
},
"outputs": [
{
"data": {
"text/plain": [
"transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check the class of the model in the pipeline\n",
"type(gpt2_pipeline.model)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IlimROpL-60i"
},
"source": [
"Furthermore, we can check the number of parameters of any Hugging Face model by calling the `num_parameters` method of a model object. How many parameters (weights) does our GPT-2 model have?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 36
},
"executionInfo": {
"elapsed": 134,
"status": "ok",
"timestamp": 1700492822634,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "OiG0-igO-6Pz",
"outputId": "92ce430c-ed97-4bc3-c11f-27683ecdce4b"
},
"outputs": [
{
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'124,439,808'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Get the number of model parameters, format nicely with an f-string\n",
"f\"{gpt2_pipeline.model.num_parameters():,}\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HgOr3FNG_TfC"
},
"source": [
"Here we can see our GPT-2 model has just over 124 million parameters. Now we can move forward into generating some text using the model."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wYyLP27G_gwm"
},
"source": [
"### Generating Text\n",
"In this section, we will generate some text using the GPT-2 model, and also explore the different decoding methods for doing so, and the effect they have on outputs.\n",
"\n",
"First, let us generator text from the pipeline using the default behavior. To do this, we simply pass in a string of text and no other arguments:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 482,
"status": "ok",
"timestamp": 1700492826566,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "pkMbKvSE_69J",
"outputId": "d967504c-45db-4851-804b-df7736732ffa"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'generated_text': 'The rain in Spain falls mainly in the plain, so it comes on top of us like an ointment to help us to put our nose up.\\n\\nWith the wet, there is a bit of moisture in them. The rainfall in Spain'}]\n"
]
}
],
"source": [
"my_input_string = \"The rain in Spain falls mainly in the plain\"\n",
"\n",
"# Generate output\n",
"output = gpt2_pipeline(my_input_string)\n",
"\n",
"# Display\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ecRdP52YAGQH"
},
"source": [
"We can see that the model has actually generated a `list` of outputs, each which are a dictionary. Let's take a look at the first output:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 207,
"status": "ok",
"timestamp": 1700492829725,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "f80Tt9cwANT6",
"outputId": "674a7266-8015-4a10-9e77-bf3b5cbe1d18"
},
"outputs": [
{
"data": {
"text/plain": [
"{'generated_text': 'The rain in Spain falls mainly in the plain, so it comes on top of us like an ointment to help us to put our nose up.\\n\\nWith the wet, there is a bit of moisture in them. The rainfall in Spain'}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"output[0]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eluR-uhpASA3"
},
"source": [
"This is just a dictionary with a single key, `generated_text`, which contains both the input we sent into the model, as well as the tokens the model predicted. We can display the output a little more nicely using the [Markdown](https://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#IPython.display.Markdown) object from IPython (Jupyter), to render it inline like the rest of the text in our notebook here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 89
},
"executionInfo": {
"elapsed": 135,
"status": "ok",
"timestamp": 1700492831993,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "klR2eElYAsEF",
"outputId": "66f76f3b-ae40-4ae4-bf34-96f7572f46e9"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain, so it comes on top of us like an ointment to help us to put our nose up.\n",
"\n",
"With the wet, there is a bit of moisture in them. The rainfall in Spain"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from IPython.display import Markdown\n",
"\n",
"display(Markdown(\"---\")) # dividing line\n",
"display(Markdown(output[0]['generated_text']))\n",
"display(Markdown(\"---\")) # dividing line"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ryM6HHNCA2Wx"
},
"source": [
"There, that's better! Now the text is displayed nicely with dividers. Let's move on now to different parameters we have at our disposal for how a model generates text, or as in the language of LLMs, different *decoding strategies*."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eEmAIq7ky7dj"
},
"source": [
"### Text Decoding Strategies\n",
"\n",
"As we will see in this section, there is some complexity to creating text outputs with generative language models. Creating new outputs from a given prompt is not as simple as entering the input and getting a predicted output. Generative text models have parameters which control the amount of variability in their outputs; this is a desirable quality to make the outputs seem both more realistic (as if from a human) and variety being injected into the model outputs also increases the likelihood of reaching a novel result that is pleasing to the user and deemed to be \"good\".\n",
"\n",
"First, we will consider the simplest (vanilla) text generation approaches in order to both gradually work our way up, and also contrast with, using them with methods which introduce variety and \"creativity\". The two simplest decoding methods for text generation we will consider first are *greedy search* and *beam search*.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9_RgA715ClwR"
},
"source": [
"#### Greedy Search\n",
"\n",
"Greedy search is the simplest text generation approach: in this case, no variety is introduced as all. Recall a text generation model takes a sequence of input tokens and its task is to predict the next token given the input. For greedy search, the next predicted token is always just that with the highest probability.\n",
"\n",
"
\n",
"\n",
"
\n",
"
Greedy Search. Here, for the next two tokens the words \"plain\" and \"which\" are selected, as they have the highest individual probabilities.
\n",
"\n",
"Mathematically speaking, given an input sequence of tokens $x_1, x_2, x_3...$, the model seeks to produce an output $y_t$ at step $t$. Since generative text models (decoder models) are *autoregressive* and make predictions based upon previous predictions after the initial input, mathematically we can express the prediction task as:\n",
"\n",
"$ P(y_t|y_1, y_2, ..., y_{t-1},x)$\n",
"\n",
"Greedy search just takes the highest probability token for each prediction. Thus for the vocabulary and different calculated probabilities by the model, this is expressed mathmeatically as:\n",
"\n",
"$y_t = argmax_{y \\in V}P(y|y_1,y_2,...,y_{t-1},x)$\n",
"\n",
"Let's take a look at this with GPT-2, to do this we will play around with the [parameters](https://huggingface.co/docs/transformers/generation_strategies#customize-text-generation) we can pass to the call to `.generate` on our model in Hugging Face.\n",
"\n",
"Now that we are stepping outside of the pipeline abstraction and working in more detail, we should probably initialize a tokenizer and model, and work with these separately, passing the outputs of the tokenizer to the model directly. To do this, we will be leveraging some of the [Auto Classes](https://huggingface.co/docs/transformers/model_doc/auto) in Hugging Face.\n",
"\n",
"Since we are doing text generation, *i.e.* [causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling), we will using the `AutoModelforCausalLM` class to create the GPT-2 model, as well as creating a tokenizer using `AutoTokenizer`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6YjdHo06GH_1"
},
"outputs": [],
"source": [
"from transformers import AutoTokenizer, AutoModelForCausalLM\n",
"\n",
"# Instantiate the tokenizer\n",
"tokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\n",
"\n",
"# add the EOS token as PAD token to avoid warnings\n",
"model = AutoModelForCausalLM.from_pretrained(\"gpt2\", pad_token_id=tokenizer.eos_token_id).to(device)\n",
"\n",
"# Text input string\n",
"input_string = \"The rain in Spain falls mainly in the plain\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Q-t6vXODTYXr"
},
"source": [
"Great, now we have the tokenizer, model, and input string. We pass the input string into the tokenizer to get a back a list of token ids, as well as the attention mask for the transformer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OS1bCiCHTXSQ"
},
"outputs": [],
"source": [
"# encode context the generation is conditioned on\n",
"model_inputs = tokenizer(input_string, return_tensors='pt').to(device)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 6,
"status": "ok",
"timestamp": 1700492852545,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "18GRfjvoTjjl",
"outputId": "cedafc1e-3a14-4c68-f7d3-ec758ac935e1"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'input_ids': tensor([[ 464, 6290, 287, 8602, 8953, 8384, 287, 262, 8631]],\n",
" device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}\n"
]
}
],
"source": [
"# What is the result?\n",
"print(model_inputs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "M6lehyjRTmYs"
},
"source": [
"We then pass this to the model method `generate`. Here we use the \"double-star\" syntax, where the dictionary that is passed in is \"unpacked\" by python, so the function receives separate arguments for `input_ids` and `attention_mask` from the associated values. Let's take a look at the result:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 675,
"status": "ok",
"timestamp": 1700492856018,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "ZvXdukKpRxNj",
"outputId": "77f9b27c-ac53-4df8-cc7c-6fc596b64430"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1473: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )\n",
" warnings.warn(\n",
"/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1273: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[ 464, 6290, 287, 8602, 8953, 8384, 287, 262, 8631, 286,\n",
" 262, 50206, 12010, 11, 475, 340, 318, 635, 287, 262]],\n",
" device='cuda:0')\n"
]
}
],
"source": [
"# Do greedy generation to generate the output token ids\n",
"greedy_output = model.generate(**model_inputs)\n",
"\n",
"print(greedy_output)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tRWp-4VUUO3V"
},
"source": [
"We can see that the result is just a list of integers. These are the token ids that were predicted by the model as the next most likely, based upon the tokenizer vocabulary. So we can convert these token ids back into text by passing them through the tokenizer as a final step:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 66
},
"executionInfo": {
"elapsed": 8,
"status": "ok",
"timestamp": 1700492858407,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "02Xae7mmUPff",
"outputId": "7e16e650-5a07-46d5-86ac-a0e88e104dbc"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of the Canary Islands, but it is also in the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Decode the tokens back to text using the tokenizer\n",
"output_string = tokenizer.decode(greedy_output[0])\n",
"\n",
"# Print the result\n",
"display(Markdown(\"---\")) # dividing line\n",
"display(Markdown(output_string))\n",
"display(Markdown(\"---\")) # dividing line"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SBAjuXYvUjLd"
},
"source": [
"And that's it! The whole text generation process goes like this:\n",
"1. Instantiate tokenizer and model\n",
"2. Pass input string to tokenizer to generate token ids and attention mask\n",
"3. Generate output token ids (predictions) from the model\n",
"4. Decode output token ids back into text using tokenizer\n",
"\n",
"We can visualize the whole process with the figure below:\n",
"\n",
"
\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YEgQirNNbTTL"
},
"source": [
"It should be noted that with greedy search, we will always be picking the most likely output tokens, and so the final result will be completely determinstic and the same each time. We can see this with the behavior of the model below by generating the same output over and over:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5vM1XzhkbQCJ"
},
"outputs": [],
"source": [
"# Initial generation\n",
"greedy_output = model.generate(**model_inputs)\n",
"output_string = tokenizer.decode(greedy_output[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 5,
"status": "ok",
"timestamp": 1700492866494,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "LO2cKTW4bunN",
"outputId": "6b2ea702-c31e-4dd7-ba81-077a50172a95"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The rain in Spain falls mainly in the plain of the Canary Islands, but it is also in the\n"
]
}
],
"source": [
"# Output\n",
"print(output_string)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "EyUTG0NmbyHs"
},
"outputs": [],
"source": [
"# Second generation\n",
"greedy_output2 = model.generate(**model_inputs)\n",
"output_string2 = tokenizer.decode(greedy_output2[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 130,
"status": "ok",
"timestamp": 1700492871272,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "fZXq6De8b5IZ",
"outputId": "f90d6e00-ca72-420b-c292-12923a10fd76"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The rain in Spain falls mainly in the plain of the Canary Islands, but it is also in the\n"
]
}
],
"source": [
"# Output\n",
"print(output_string2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "s25JZeoCcBDO"
},
"source": [
"We can see that we will always get the same result as an output here based on the model. Let us now explore other approaches for generating text which generate the series of output tokens based on different approaches."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yEDuh9ysGIoy"
},
"source": [
"#### Beam Search\n",
"\n",
"Beam search is an improvement on greedy search which considers the most likely sequence of tokens *together*, based on their respective probabilities, as opposed to just taking the most probable individual token at each timestep.\n",
"\n",
"A *beam width* is specified, and over the width of the beam (number of generated tokens), the combination of tokens with the highest collective probability is selected, as opposed to just selecting the individual token with the highest probability, as with greedy search.\n",
"\n",
"
\n",
"\n",
"
\n",
"
Beam Search. Here, for the next two tokens the words \"meadow\" and \"grasses\" are selected, as they joint probability of 0.36 (0.4 x 0.9) is greater than that of the tokens selected in greedy search which is 0.33 (0.6 x 0.55).
\n",
"\n",
"A couple points to note about beam search is that searching over a larger sequence of tokens (*i.e.* increasing `beam_size`) will result in significantly improved quality of outputs at the cost of increased computation.\n",
"\n",
"There is a \"law of diminishing returns\" with beam search: typically there is a saturation point beyond which increasing the beam size does not significantly change the most likely generated sequence, as the probabilities are dominated by the product of the most frequently occurring tokens in the sequence considered by beam search.\n",
"\n",
"Generally speaking, beam search can lead to repetitive outputs for open-ended generation. This is why it and greedy search are used in conjuction with sampling."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Jb1xyHrOumWy"
},
"source": [
"To generate text with beam search in Hugging Face, we set the `num_beams` parameter to a value greater than 1 (which would be equivalent to greedy search) and `early_stopping=True`, so generation finishes when all beams pass back an \"end of string\" (EOS) token.\n",
"\n",
"We have already created our tokenizer and model, so this can just be done in the call to `model.generate()`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gi98BPxVulNE"
},
"outputs": [],
"source": [
"# Text input string\n",
"input_string = \"The rain in Spain falls mainly in the plain\"\n",
"\n",
"# Model input\n",
"model_inputs = tokenizer(input_string, return_tensors='pt').to(device)\n",
"\n",
"# Generate output with beam search\n",
"greedy_output = model.generate(**model_inputs, num_beams=10, early_stopping=True)\n",
"\n",
"# Decode the output\n",
"output_string = tokenizer.decode(greedy_output[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 46
},
"executionInfo": {
"elapsed": 131,
"status": "ok",
"timestamp": 1700492879956,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "1NKEP0qvvaUi",
"outputId": "7a08be15-bcdb-4834-8441-4dd68943e5e2"
},
"outputs": [
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain to the south of the city of Barcelona.\n",
"\n"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display(Markdown(output_string))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gPHE6_KkvkFv"
},
"source": [
"We can see that beam search has returned quite a different result from that of greedy search, by looking over the collective probabilities of a number of predicted token possibilities, instead of just each following token."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iI2-0NaV53dx"
},
"source": [
"#### Sampling Strategies\n",
"\n",
"While the different search decoding strategies provide some varaibility in the outputs of a generative text model, they are still determinisitc in their outputs, and this can lead to either a.) poor outputs or b.) repeated identical outputs, the latter of which is not a desireable traits for end users.\n",
"\n",
"As such, there also exist different *sampling strategies* for introducing variability and novelty into the outputs of generative text models. The three main parameters available for different sampling strategies are *temperature*, *top-p,* and *top-k*.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YJK4wlqFwrDA"
},
"source": [
"##### Temperature\n",
"\n",
"The temperature is a factor which normalizes or \"smooths out\" the output probabilities of predicted tokens. In practice, it is used to control the variability (or randomness, or \"creativity\") of the outputs of a model.\n",
"\n",
"Mathematically speaking, calculating the model probability for predicting any individual token as the next one, such that all probabilities lie between zero and one and sum to one, is attained using the softmax function:\n",
"\n",
"$ P(y_i) = \\frac{e^{z_i}}{\\sum_{j=1}^{N}e^{z_j}} $\n",
"\n",
"where:\n",
"- $P(y_i)$ is the probability of selecting the $i$th token.\n",
"- $z_i$ is the logit, the raw score or output, from the model for token $i$\n",
"- and $N$ is the total size of the vocabulary\n",
"\n",
"we introduce a new variable $\\tau$ for temperature and update the probability formula as below:\n",
"\n",
"$P(y_i) = \\frac{e^{z_i / \\tau}}{\\sum_{j=1}^{N}e^{z_j / \\tau}}$\n",
"\n",
"Given the above, if $\\tau = 1$, the formula for the probabilities, and thus the behavior of the model, is unchanged. It can be shown that as $\\tau \\to \\infty$, that $P(y_i) \\to 1$ for all $i$, and so the likelihood of any token predicted becomes equal. This results in a completely uniform distribution of probabilities acrosss all possible tokens.\n",
"\n",
"On the other hand, as $\\tau \\to 0$, the probability for any given token can be represented by:\n",
"\n",
"$$\n",
"P(i)=\\begin{cases}\n",
" 1 & \\text{if $i$ is max probability}\\\\\n",
" 0 & \\text{otherwise}\n",
" \\end{cases}\n",
"$$\n",
"\n",
"That is to say, the most likely token will have a probability of 1, and then others will have their probabilities set to 0, and the output of the model will be completely deterministic.\n",
"\n",
"To put in another way, setting a low value to temperature (value of 0) means that the most likely next tokens will always be returned, whereas setting higher values to temperature flattens the probabilities across the different possible tokens, resulting in increasingly random outputs for greater values of $\\tau$.\n",
"\n",
"\n",
"This is visualized in the figure below:\n",
"\n",
"
\n",
"\n",
"
\n",
"
Visualizing the effect of changing temperature on next token probabilities
\n",
"\n",
"There is a balance to be struck, as too low a temperature will result in a model always returning the same output for a given input - that is, acting deterministically - whereas setting the temperature too high can result in garbled and incoherent."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YwuaLcbX_UB-"
},
"source": [
"Now let's try experimenting with changing the temperature parameter for text geenration using GPT-2. In Hugging Face, this is controlled by the `temperature` parameter in either calls to a model pipeline, in directly in the text generation call in `model.generate()`. We must also set the `do_sample=True` argument, to tell Hugging Face to use sampling and not to do greedy search.\n",
"\n",
"First, let's set a temperature (close to that) of 0, which will always result in the most likely token be chosen. Note that this is equivalent to greedy search:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 112
},
"executionInfo": {
"elapsed": 689,
"status": "ok",
"timestamp": 1700492888113,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "AiAA5zi-_p4q",
"outputId": "b3be2da0-78ac-4188-ff2e-8c0a8f8cf044"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of the Canary Islands, but it is also in the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of the Canary Islands, but it is also in the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of the Canary Islands, but it is also in the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Text input string\n",
"input_string = \"The rain in Spain falls mainly in the plain\"\n",
"\n",
"# Generation = temperature ~= 0 - deterministic\n",
"model_inputs = tokenizer(input_string, return_tensors='pt').to(device)\n",
"zero_temp_output = model.generate(**model_inputs, temperature=0.00001, do_sample=True, num_return_sequences=3)\n",
"\n",
"# Iterate over outputs and display in markdown\n",
"display(Markdown(\"---\"))\n",
"\n",
"for output in zero_temp_output:\n",
" output_string = tokenizer.decode(output)\n",
" display(Markdown(output_string))\n",
"\n",
"display(Markdown(\"---\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BqmxxbfIGiHk"
},
"source": [
"We see that the same output is returned as before, and we can run the above cell multiple times and always get back the same input. Now let's set the temperature to 1, which will leave the next token probabilities unchanged. In this case, we should be able to get different outputs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 112
},
"executionInfo": {
"elapsed": 163,
"status": "ok",
"timestamp": 1700492893710,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "GA7GMbtpAUx8",
"outputId": "43dfe861-761e-4b98-86a3-77ef43998dfd"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of Barcelona city centre in the north. It rains on"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of SΓ³lΓΊm where the rivers end in"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of Iberian Peninsula, to the south of the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Text input string\n",
"input_string = \"The rain in Spain falls mainly in the plain\"\n",
"\n",
"# Generation: temperature = 1, default behavior\n",
"model_inputs = tokenizer(input_string, return_tensors='pt').to(device)\n",
"temp1_output = model.generate(**model_inputs, temperature=1, do_sample=True, num_return_sequences=3)\n",
"\n",
"# Iterate over outputs and display in markdown\n",
"display(Markdown(\"---\"))\n",
"\n",
"for output in temp1_output:\n",
" output_string = tokenizer.decode(output)\n",
" display(Markdown(output_string))\n",
"\n",
"display(Markdown(\"---\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FepkJD38AV_I"
},
"source": [
"Cool, those all seem like reasonable outputs, even though they are all different. We have introduced some variability into the model outputs which makes for novelty.\n",
"\n",
"Finally, let's really crank up the temperature! This will make all output tokens equally likely, resulting in very \"creative\" outputs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 112
},
"executionInfo": {
"elapsed": 246,
"status": "ok",
"timestamp": 1700492908650,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "f8Xp8KRIAdan",
"outputId": "f0c9c48e-71c3-4181-a10f-f28214277850"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plainlands as it drihes back in off France, or"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain regions like Ligue D (near Basir da M"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain, just behind what seemed less developed communities like Castil"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Text input string\n",
"input_string = \"The rain in Spain falls mainly in the plain\"\n",
"\n",
"# Generation: temperature = 1B, all tokens equally likely\n",
"model_inputs = tokenizer(input_string, return_tensors='pt').to(device)\n",
"high_temp_output = model.generate(**model_inputs, temperature=1.0e9, do_sample=True, num_return_sequences=3)\n",
"\n",
"# Iterate over outputs and display in markdown\n",
"display(Markdown(\"---\"))\n",
"\n",
"for output in high_temp_output:\n",
" output_string = tokenizer.decode(output)\n",
" display(Markdown(output_string))\n",
"\n",
"display(Markdown(\"---\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NwTTbc5WH7yc"
},
"source": [
"As we can see above, setting a high value for temperature results in more \"creative\" outputs but some of these are less coherent than those with lower temperature.\n",
"\n",
"Now let us consider further sampling strategies for introducing variability in model outputs whilst attempting to maintain the quality thereof."
]
},
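{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on, here is a tiny numerical illustration of what temperature does under the hood (the logit values below are made up purely for illustration): the raw next-token logits are divided by the temperature before the softmax, so a very low temperature makes the distribution nearly one-hot, a temperature of 1 leaves it unchanged, and a very high temperature makes it nearly uniform."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"# Toy next-token logits for five candidate tokens (values made up for illustration)\n",
"logits = torch.tensor([4.0, 2.5, 2.0, 1.0, 0.5])\n",
"\n",
"# Temperature divides the logits before the softmax\n",
"for T in [0.01, 1.0, 100.0]:\n",
"    probs = F.softmax(logits / T, dim=-1)\n",
"    print(f\"T = {T:>6}: {[round(p, 3) for p in probs.tolist()]}\")"
]
},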
{
"cell_type": "markdown",
"metadata": {
"id": "iNbJNYEDz7Vp"
},
"source": [
"##### Top-p & Top-k sampling\n",
"\n",
"Unlike temperature, which changes the different calculated probabilities of the next token, *top-p* and *top-k* instead function by reducing the size of the set of possible tokens to choose from. Though are differently in how they are applied, they both restrict the set of possible next tokens to only the most likely ones above a specified threshold, and then redistribute the probability mass amongst this smaller set. They are typically used in conjunction with temperature to produce varied but still comprehensible outputs."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IbO9-lrAn_UJ"
},
"source": [
"In *top-k* sampling, instead of calculating probabilities and sampling from all possible tokens, a cutoff integer value $k$ is specified, and only the top $k$ ranked tokens are used as the set of possible next tokens. The total probability (summing to 1) is redistributed amongst these top $k$ tokens.\n",
"\n",
"This is illustated in the figure below. Instead of choosing from all possible next words, only the top 5 words would be considered, and the probabilities would be redistributed amongst them:\n",
"\n",
"
\n",
"
\n",
"
Top-k sampling: only the most probable tokens above and including rank $k$ are kept
"
]
},
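{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here is a minimal sketch of top-k filtering on a toy next-token distribution (the probability values are made up for illustration; Hugging Face performs the equivalent filtering internally when `top_k` is set):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Toy next-token probabilities for eight candidate tokens (made up for illustration)\n",
"probs = torch.tensor([0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.05])\n",
"k = 5\n",
"\n",
"# Keep only the k most probable tokens...\n",
"topk_probs, topk_indices = torch.topk(probs, k)\n",
"\n",
"# ...and redistribute the probability mass so they sum to 1 again\n",
"renormalized = topk_probs / topk_probs.sum()\n",
"\n",
"print(\"Kept token indices:\", topk_indices.tolist())\n",
"print(\"Renormalized probabilities:\", [round(p, 3) for p in renormalized.tolist()])"
]
},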
{
"cell_type": "markdown",
"metadata": {
"id": "pnJgAvpJoApB"
},
"source": [
"*Top-p*, or *nucleus sampling* differs in that instead of specifying a rank $k$ and taking the most probable tokens this rank or above, in top-p a probability threshold $p$ is specified, and only the top tokens which a combined probability above this threshold are kept in the set of next possible tokens. This differs from top-k in that we don't specify the size of the set of next tokens, only the total probability.\n",
"\n",
"Coming back to our previous example, here using top-p, we wish only to keep tokens which have a combined probability equal to or above a threshold 0.8. In this case the top four most likely next tokens meet this criteria (as $0.5 + 0.15 + 0.1 + 0.05 = 0.8$) so the total probabilty would be redistributed only amongst them:\n",
"\n",
"
\n",
"
\n",
"
Top-p sampling: only the tokens with cumulative probability above the specified threshold $p$ are kept
"
]
},
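{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here is the same idea for top-p as a minimal sketch, reusing the probabilities from the example above ($0.5$, $0.15$, $0.1$ and $0.05$ for the top four tokens, with the remaining $0.2$ spread over less likely tokens). Hugging Face's internal implementation differs slightly at the boundary, but the principle is the same:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Toy next-token probabilities: the top four tokens match the example above\n",
"# (0.5 + 0.15 + 0.1 + 0.05 = 0.8); the remaining 0.2 is spread over less likely tokens\n",
"probs = torch.tensor([0.50, 0.15, 0.10, 0.05, 0.04, 0.04, 0.04, 0.04, 0.04])\n",
"p = 0.8\n",
"\n",
"# Sort by probability and keep the smallest set whose cumulative probability reaches p\n",
"sorted_probs, sorted_indices = torch.sort(probs, descending=True)\n",
"cumulative = torch.cumsum(sorted_probs, dim=-1)\n",
"keep = cumulative <= p + 1e-6  # small tolerance for floating point at the boundary\n",
"\n",
"# Redistribute the probability mass over the kept tokens\n",
"nucleus = sorted_probs[keep]\n",
"renormalized = nucleus / nucleus.sum()\n",
"\n",
"print(\"Kept token indices:\", sorted_indices[keep].tolist())\n",
"print(\"Renormalized probabilities:\", [round(x, 3) for x in renormalized.tolist()])"
]
},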
{
"cell_type": "markdown",
"metadata": {
"id": "8JzBA8_RpFod"
},
"source": [
"In Hugging Face, top-k and top-p sampling can be used by specifying them in with the arguments `top_k` and `top_p` respectively. `top_k` is an integer value, and `top_p` a floating point between 0 and 1.\n",
"\n",
"Note that both of these will still just return the most likely sequences (deterministically) and so should be combined with beam search and/or temperature. These allows returning multiple outputs with `num_return_sequences` as we've seen before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 248
},
"executionInfo": {
"elapsed": 1028,
"status": "ok",
"timestamp": 1700492924857,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "7OsY0Fxpp3Jd",
"outputId": "b2da9a4c-e231-4075-e201-68e1a15ca4b3"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"Top-k, $k=30$:"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of Catalonia, an important city which lies along the coast"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of Sertra on a Saturday, but this week"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain which is part of the Caja Rural (Cran"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"Top p, $p=0.5$:"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain, and there are no visible signs of rain in the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of the Canary Islands, which is the home of the"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of Madrid, but in the mountains of the Andalus"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Text input string\n",
"input_string = \"The rain in Spain falls mainly in the plain\"\n",
"model_inputs = tokenizer(input_string, return_tensors='pt').to(device)\n",
"\n",
"# Generation - Top-k & Top-p\n",
"top_k_output = model.generate(**model_inputs, top_k=30, do_sample=True, num_return_sequences=3)\n",
"top_p_output = model.generate(**model_inputs, top_p=0.5, do_sample=True, num_return_sequences=3)\n",
"\n",
"# Top K\n",
"display(Markdown(\"---\"))\n",
"display(Markdown(\"Top-k, $k=30$:\"))\n",
"for output in top_k_output:\n",
" output_string = tokenizer.decode(output)\n",
" display(Markdown(output_string))\n",
"\n",
"# Low Top K\n",
"display(Markdown(\"---\"))\n",
"display(Markdown(\"Top p, $p=0.5$:\"))\n",
"for output in top_p_output:\n",
" output_string = tokenizer.decode(output)\n",
" display(Markdown(output_string))\n",
"display(Markdown(\"---\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LiHUoVf5p16Q"
},
"source": [
"Top-p and Top-K can be used in conjunction, to avoid very low ranked words while allowing for variability. In pratice, this requires a fair bit of trial and error to find good values for $k$ and/or $p$, combined with temperature."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 112
},
"executionInfo": {
"elapsed": 445,
"status": "ok",
"timestamp": 1700492935027,
"user": {
"displayName": "NLP from scratch",
"userId": "13636460506782883737"
},
"user_tz": 300
},
"id": "wYP6Io4AukbE",
"outputId": "a53e6cb7-c8ac-4c53-81b7-f45d98c45097"
},
"outputs": [
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain, which is covered with a thick layer of thick layer"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain of the Andalusia Mountains, which have been a"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"The rain in Spain falls mainly in the plain. The city's roads are covered with asphalt, and"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/markdown": [
"---"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Putting it all together\n",
"outputs = model.generate(\n",
" **model_inputs,\n",
" do_sample=True,\n",
" top_k=30,\n",
" top_p=0.5,\n",
" temperature=1.5,\n",
" num_return_sequences=3,\n",
")\n",
"\n",
"display(Markdown(\"---\"))\n",
"for output in outputs:\n",
" output_string = tokenizer.decode(output)\n",
" display(Markdown(output_string))\n",
"display(Markdown(\"---\"))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dl9r_FqlLov4"
},
"source": [
"## Working with a Chat-style model: LLaMA 3.2\n",
"\n",
"Now that we have worked with a basic generative text model, we will move on to working with a modern LLM with a \"chat\" style model. In this section we will use LLaMA 3.2, from Meta's incredibly popular open source [LLaMA](https://www.llama.com/) series of models.\n",
"\n",
"A chat-style model (or \"instruct\" model as they are also referred to, as they receive instructions from the user) actually functions exactly the same as a regular generative text model such as GPT-2, the only difference is in the training data and the way the model outputs are displayed.\n",
"\n",
"The models were trained on JSON which contain conversations, with three different roles:\n",
"- The **user** role: This is us, or the person talking to the chat bot.\n",
"- The **assistant** role: These are the responses from the model.\n",
"- The **system** role: This is a role that dictates the overall behavior of the model and style of its responses.\n",
"\n",
"So, in a way, an chat-style model is not actually responding, but applying the \"autcomplete on steroids\" of regular generative text models, just in this case autcompleting a conversation, and we are only show the responses from the *assistant* role, and we provide the responses for the *user* role."
]
},
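{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, we can peek at what the model actually \"autocompletes\" by asking a tokenizer to render a list of role-based messages as flat text. This is only an illustrative sketch: it loads the tokenizer from the Llama 3.2 repo used below, assumes a recent `transformers` version with `apply_chat_template`, and the exact special tokens in the rendered prompt depend on the tokenizer's chat template."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"# Tokenizer for the chat model used in the next section; its chat template\n",
"# defines how roles and messages are flattened into a single prompt string\n",
"chat_tokenizer = AutoTokenizer.from_pretrained(\"unsloth/Llama-3.2-1B-Instruct\")\n",
"\n",
"messages = [\n",
"    {\"role\": \"system\", \"content\": \"You are a helpful assistant, that responds as a pirate.\"},\n",
"    {\"role\": \"user\", \"content\": \"Write a poem about applesauce.\"},\n",
"]\n",
"\n",
"# Render the conversation as plain text, ending with the cue for the assistant to respond\n",
"flat_prompt = chat_tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n",
"print(flat_prompt)"
]
},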
{
"cell_type": "markdown",
"metadata": {
"id": "_0dLJS75NBWA"
},
"source": [
"Let's see this in action with LLaMA 3.2. Unfortunately, using the LLaMA series of models from the [official Hugging Face repos](https://huggingface.co/meta-llama) requires accepting a [license](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt) and user agreement, which therefore means they cannot be used anonymously (*i.e.* without a HF account).\n",
"\n",
"Fortunately, a copy of 1B parameter version of LLaMA-3.2-Instruct is provided by Unsloth in their repo at https://huggingface.co/unsloth/Llama-3.2-1B-Instruct.\n",
"\n",
"Let's test it out using a `pipeline`. Here the model is trained differently, so we have to provide the text in the JSON format it expects with *system* and *user* roles:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 246,
"referenced_widgets": [
"400a16eacf9647958e8d79d723bd950c",
"a31c1399a56a43e194f0d4b1e2ca5f32",
"e1dc0bb03e0d442d8ffbb028b8c3ff77",
"2a9a02bd758a40fb9bcb08adb21aeb6e",
"64ee92b777214dabb6b4981d7f4a7e4d",
"99c59f691b8e4e78b0922389bfcbc070",
"1425197aca1d466fa54a94eb5075487c",
"1911cb19597b49c8bed6d21962af5239",
"e5beb10d76fe40f58650bce31c02cb4d",
"e335ecd631254cf78cef57b94d8b7961",
"cb03fd36b19f4c98876625c6320a8a32",
"77a6823ca3704f76bc693ab663ab46a6",
"065d20e83f5443d58740da60b0855613",
"75e38f6075ef4b4bb5fc3d05dfa2c7a3",
"5cb4b3c8407b422c92431a8e1bfe7bf0",
"d5f912998fa04eb5b5ad8d6051a9d739",
"ee28f49002964b4ab44edb1f61cfb077",
"e415a6dc96f143fd9e12e05905a325e7",
"96eaad0070dc485f95b5090185b9f252",
"304e79b892524faabab20121628f9049",
"aa0f9e0a59f84562b5c6dc62b81f8fdb",
"c737b5430375486aa5e5a77e84d684ba",
"99103892ab9945f99e696bfd6f0f9b15",
"e006123cab624724beee682a2b78c218",
"212f672af600484191a864ddaa026f9a",
"b96af951f18642ecb800299cc056b67a",
"b2696b1f8c6348a68c8ef9116a5a4ed1",
"af78fa6e7ffe4c6baef5c7522fc57154",
"0a26bff5509d46e391a9f2e51ec05085",
"c86f4d6e9e8b4caca04fa605762d2cf3",
"857471c091f0421d879bf416068dbab1",
"a1f7e2e022ad4b5e84019c85847b3022",
"b9e270dc44304ab4acc6ff7384706ce7",
"58d9df19503849a39d6a677d71c7d59d",
"b48cd1a9c8a2466da20d2f71dfcc3bc0",
"839931b9516246f6ba63e42cbd3c2fb2",
"13e3c081e0c147c1b1c198f8b9844240",
"9ec4be8031c540a9b1fac2b4f1de5610",
"0da24a8dcd9c46bc978788db492cd63f",
"65e14bb2e8e346b08910ce493ee9366b",
"059b3db8448348c285c195901179f7db",
"119556d2ea08451a89cd2e1f464e619f",
"e5cb2824726c4beeac2bad239e21805e",
"b57c1d3b95ab4a21bbaa3185b6f935b7",
"38808b87b7474ee0a36069d387c64c01",
"0eb4f8ec71ea40178dc95cfe4459528e",
"a369b9c708be4c219720db7004ea657e",
"7da683cd80dc41a9af7ddc6b36c5cedf",
"2ef779152fbf4cac8e24436c796a7780",
"c0312c14f08b48ad855dd7758bcb99b4",
"6d0be0aa12fe4557871cb2b14e4a906e",
"3f21d8ec2ab34f65ac4ce94110bebf4f",
"78ab2329436441f69896f940fde518d2",
"2ea2048f0be44f5dbe77a20f92ac16eb",
"e7b23854c9e34cc5baa675c9fe4eeeb0",
"c40a7b7eca514f84b5693a0e0e328f39",
"d3428d1e6172449ba94ed420f84229e5",
"1a84275f18f24ff488b8974dd859b899",
"296cdc80eb5640e797eeeb3475107b0d",
"1fb5df80bf8e4d69a0e6531b3ba250d6",
"e291b4932a354e2b8afd33962223f584",
"b84ed55a1c1540f39fce769dfe3376d8",
"a0f7aa50340a420a8e1c625328753253",
"a639522a5a5341cfa1b9f2221a0f9f87",
"a1dc6efb831144ba881dd9a051ddede5",
"a40387ca022b42bbaf65b8fbcae3d7a9"
]
},
"id": "N9_sB3LuLfLm",
"outputId": "478c5fa4-f2a4-4568-d432-323320745f05"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "400a16eacf9647958e8d79d723bd950c",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"config.json: 0%| | 0.00/927 [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "77a6823ca3704f76bc693ab663ab46a6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"model.safetensors: 0%| | 0.00/2.47G [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "99103892ab9945f99e696bfd6f0f9b15",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"generation_config.json: 0%| | 0.00/184 [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "58d9df19503849a39d6a677d71c7d59d",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer_config.json: 0%| | 0.00/54.6k [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "38808b87b7474ee0a36069d387c64c01",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer.json: 0%| | 0.00/9.09M [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c40a7b7eca514f84b5693a0e0e328f39",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"special_tokens_map.json: 0%| | 0.00/454 [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generation: [{'role': 'system', 'content': 'You are a helpful assistant, that responds as a pirate.'}, {'role': 'user', 'content': 'Write a poem about applesauce.'}, {'role': 'assistant', 'content': \"Yer lookin' fer a poem about applesauce, eh?\\n\\nOh, applesauce, sweet and fine,\\nA treasure from the orchard's vine.\\nIn jars or pouches, ye be stored,\\nA treasure fer young and old.\\n\\nMe mouth waters at the thought o' thee,\\nA taste o' autumn, wild and free.\\nNo need fer sugar, no need fer spice,\\nJust applesauce, a simple, sweet device.\\n\\nIn the morning, on yer toast or bread,\\nA spoonful o' applesauce, a treat ahead.\\nOr in a stew, or in a pie,\\nApplesauce, a flavor that never dies.\\n\\nSo here's to applesauce, me hearty friend,\\nA treasure that never doth end.\\nMay yer belly be full, and yer heart be light,\\nWith applesauce, the perfect delight!\"}]\n"
]
}
],
"source": [
"import torch\n",
"device = \"cuda\"\n",
"llama_32 = \"unsloth/Llama-3.2-1B-Instruct\"\n",
"\n",
"prompt = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant, that responds as a pirate.\"},\n",
" {\"role\": \"user\", \"content\": \"Write a poem about applesauce.\"},\n",
"]\n",
"\n",
"generator = pipeline(model=llama_32, device=device, torch_dtype=torch.bfloat16)\n",
"generation = generator(\n",
" prompt,\n",
" do_sample=False,\n",
" temperature=1.0,\n",
" top_p=1,\n",
" max_new_tokens=1000\n",
")\n",
"\n",
"print(f\"Generation: {generation[0]['generated_text']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ixAmFw2_PJAG"
},
"source": [
"Now to get the model response, we just print the generated text for the *assistant* role - the 3rd element in the array:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 140
},
"id": "bACUqTvqPIrC",
"outputId": "0e03b1a6-8ade-4e03-aaab-07fa74f8d0a2"
},
"outputs": [
{
"data": {
"text/markdown": [
"Yer lookin' fer a poem about applesauce, eh?\n",
"\n",
"Oh, applesauce, sweet and fine,\n",
"A treasure from the orchard's vine.\n",
"In jars or pouches, ye be stored,\n",
"A treasure fer young and old.\n",
"\n",
"Me mouth waters at the thought o' thee,\n",
"A taste o' autumn, wild and free.\n",
"No need fer sugar, no need fer spice,\n",
"Just applesauce, a simple, sweet device.\n",
"\n",
"In the morning, on yer toast or bread,\n",
"A spoonful o' applesauce, a treat ahead.\n",
"Or in a stew, or in a pie,\n",
"Applesauce, a flavor that never dies.\n",
"\n",
"So here's to applesauce, me hearty friend,\n",
"A treasure that never doth end.\n",
"May yer belly be full, and yer heart be light,\n",
"With applesauce, the perfect delight!"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display(Markdown(generation[0]['generated_text'][2]['content']))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pUskBES0kT5P"
},
"source": [
"Here we have only seen a single input and response from the model, but this is, in principle how chat-based applications like ChatGPT work."
]
},
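{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative sketch of that loop (the follow-up request below is made up), we can feed the entire conversation so far, including the assistant's reply, back into the same pipeline together with a new *user* turn:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The pipeline returned the full message list, including the assistant's reply;\n",
"# append a new (made-up) user turn to continue the conversation\n",
"chat_history = generation[0]['generated_text'] + [\n",
"    {\"role\": \"user\", \"content\": \"Now write one about porridge instead.\"},\n",
"]\n",
"\n",
"followup = generator(\n",
"    chat_history,\n",
"    do_sample=False,\n",
"    max_new_tokens=500\n",
")\n",
"\n",
"# The last message in the returned conversation is the assistant's new reply\n",
"display(Markdown(followup[0]['generated_text'][-1]['content']))"
]
},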
{
"cell_type": "markdown",
"metadata": {
"id": "PiIkj0dLc9Gy"
},
"source": [
"## Conclusion\n",
"This concludes Part 1. Next week in Part 2, we will continue where we left off and fine-tune and LLM with a custom dataset."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"----\n",
"\n",
"